refactor: redesign cache operator to accept multiple cache adapters #1940

Closed
tklever wants to merge 7 commits into ReactiveX:master from tklever:cache-adapter-operator

Conversation

tklever
Contributor

tklever commented Sep 13, 2016

This is a pretty significant refactor of the cache operator.

As we all know, caching is a difficult and multifaceted problem that changes from use case to use case. With this in mind, I decided to rebuild the current cache operator to (eventually) accept a new concept, a CacheAdapter. This adapter would be passed to the cache operator, giving developers a lot of freedom in their caching solution.

This PR, as it sits, replicates the previous functionality, automatically building a ReplaySubjectCacheAdapter instead of accepting a CacheAdapter, since that change would require a BC break to the cache signature. I didn't want to submit that BC break in case this idea is completely ridiculous and not fit for inclusion.
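
As a rough sketch of the idea (CacheAdapter and ReplaySubjectCacheAdapter are named in this PR, but the exact shape below is only an illustration, not the code in the diff): the adapter is anything the operator can push notifications into and route subscribers through.

```ts
import { ReplaySubject, Subscription } from 'rxjs';

// Illustrative shape only: the operator writes source notifications into the
// adapter and hands subscribers whatever the adapter replays or forwards.
interface CacheAdapter<T> {
  next(value: T): void;
  error(err: any): void;
  complete(): void;
  subscribe(next?: (value: T) => void,
            error?: (err: any) => void,
            complete?: () => void): Subscription;
}

// A ReplaySubject already satisfies this shape, which is roughly how the
// internally built ReplaySubjectCacheAdapter preserves the old behavior.
const defaultAdapter: CacheAdapter<number> = new ReplaySubject<number>(1);
```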

Please comb over this with the finest of toothed combs, as some of my decisions in this PR were driven purely by "Well, the tests didn't break, I guess that works".

Please let me know whether the idea is even worth exploring and whether the implementation is even halfway logical, and share any changes or questions that would make this fit to land.

@coveralls

Coverage increased (+0.02%) to 97.15% when pulling af35a7b on tklever:cache-adapter-operator into d876a35 on ReactiveX:master.

@jayphelps
Member

jayphelps commented Sep 13, 2016

@tklever

btw the commit convention is slightly different.

<type>(<scope>): <subject>
e.g.
refactor(operator): redesign cache operator to accept multiple cache adapters

The missing piece is the scope in parentheses, which is basically just the area of the code base the change affects, so commits can be grouped together.

@coveralls

Coverage increased (+0.02%) to 97.15% when pulling fb54e9b on tklever:cache-adapter-operator into 070e480 on ReactiveX:master.

@coveralls

Coverage increased (+0.02%) to 97.148% when pulling 835d40c on tklever:cache-adapter-operator into e91d113 on ReactiveX:master.

@coveralls

Coverage increased (+0.02%) to 97.022% when pulling 397b38a on tklever:cache-adapter-operator into 646b38d on ReactiveX:master.

@benlesh
Member

benlesh commented Sep 21, 2016

What are the use cases this is meant to cover? Do we have example usage? After we look at that, and decide whether or not this is something we'd want to add, we'll need to add tests for those cases.

I'm on the fence about this change. I think the cache operator will have broad use in its current form (it probably covers the 90% use case).

@tklever
Contributor Author

tklever commented Sep 21, 2016

The use case I was trying to work towards was flexibility in implementation.

The cache operator is going to have an intermediary. In its current form, it uses a ReplaySubject. I was seeking some programmatic control over that subject, allowing a user to potentially pass in different subjects or varying observers that feed that subject.

Additionally, the cache operator is going to be stateful (as it has to remember things), and I thought this could give power over that state back to the user. If the user supplies the operator with a custom CacheAdapter, they have the ability to clean it up if something goes awry or they are done with it.
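
To make that ownership point concrete with APIs that already exist (an analogy only, not this PR's adapter types): a caller-owned ReplaySubject plays the adapter role, so the caller also decides when its cached state goes away.

```ts
import { Observable, ReplaySubject } from 'rxjs';

// Caller-owned stateful intermediary: the "adapter" role in this sketch.
const adapter = new ReplaySubject<number>(1);

// Feed the adapter from a source...
const source = Observable.interval(1000).take(10);
const sourceSub = source.subscribe(adapter);

// ...and let consumers read through it; late subscribers get the latest value.
adapter.subscribe(v => console.log('latest cached value:', v));

// Because the caller holds both pieces, cleanup is entirely in their hands.
sourceSub.unsubscribe();
adapter.complete();
```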

I'm not shocked at your hesitation; even I'm not 100% sold on this, and I wrote it. But I thought I would throw it up and at least have a dialog about it, let smarter people than myself look it over, hopefully learn a thing or two, and then close it or take it in whatever direction it needs to go.

@coveralls

Coverage increased (+0.02%) to 97.083% when pulling 8393bb9 on tklever:cache-adapter-operator into e20cdb7 on ReactiveX:master.

@tklever
Contributor Author

tklever commented Oct 12, 2016

Well then, I guess #2012 pretty much makes this null and void.

This certainly isn't in good enough shape to be "THE" cache operator; it's more of a hack n' slash around the old one (which no longer exists).

Now that cache is gone, any suggestions to replicate its functionality in the meantime? And by functionality, for my purposes, I just mean basic memoization: what was the last thing emitted.

Other than that, feel free to close this and call it dead. I might take a swing at an alternative proposal, but that will be an entirely separate PR.

@trxcllnt
Member

trxcllnt commented Oct 12, 2016

@tklever depending on whether you want the multicast Observable to support retrying/repeating or not, you can use multicast with a Subject selector or Subject instance, respectively. Alternatively, I've had success with this approach to memoizing an Observable sequence, with support for eviction.
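
A minimal sketch of those two forms, assuming the RxJS 5 method-chaining API (the casts are only there to satisfy the 5.x typings):

```ts
import { Observable, Subject } from 'rxjs';
import { ConnectableObservable } from 'rxjs/observable/ConnectableObservable';

// A source that terminates, so the difference shows up when a downstream
// repeat()/retry() resubscribes.
const source = Observable.interval(100).take(3);

// Subject instance: the same Subject is reused for every connection, and a
// terminated Subject stays terminated, so repeat()/retry() downstream will
// not get the source re-run.
const viaInstance = (source.multicast(new Subject<number>()) as ConnectableObservable<number>)
  .refCount();

// Subject factory (the "selector" form): each new connection gets a fresh
// Subject, so repeat()/retry() can re-run the source after it terminates.
const viaFactory = (source.multicast(() => new Subject<number>()) as ConnectableObservable<number>)
  .refCount();

viaFactory.repeat(2).subscribe(v => console.log(v)); // the source runs twice
```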

@tklever
Contributor Author

tklever commented Oct 12, 2016

@trxcllnt That approach is something to behold, although I'm not sure I totally grasp its complexity.

It seems to solve a use case far beyond my own; I was seeking a simple alternative to .cache(1).

More or less, I want an observable chain to operate like a BehaviorSubject (with no default). Can that be achieved through your solution (or any other method)?

If not, I might monkey with a one-off operator to do so for the time being.

@trxcllnt
Member

@tklever yes, source.publishReplay(1).refCount() will cache the most recent event and multicast to >= 1 subscribers.

The downside of publishReplay is that it multicasts with a Subject instance, so if the source errors or completes, subsequent subscriptions to the refCount Observable (like repeat and retry) won't cause the ConnectableObservable to resubscribe to its source.

Since a Subject instance is used (as opposed to a Subject selector, e.g. () => new Subject()), the ConnectableObservable uses the same instance for every connection. Subjects are stateful, so once they've errored or completed, they stay that way indefinitely.

Multicasting with a Subject selector function allows the ConnectableObservable to create a new Subject instance if the subscriber resubscribes after termination, which is how we can support repeat and retry with ConnectableObservables. The drawback is that if the source errors or completes, the Subject holding all the events you want to replay is thrown away, so you lose that state. There isn't currently an option to resubscribe only if the Subject is in an errored state rather than a completed state (which is what cache was trying to accomplish).
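
For reference, a minimal sketch of that replacement for .cache(1), again assuming the RxJS 5 method-chaining API:

```ts
import { Observable } from 'rxjs';

// Stand-in for an expensive or long-lived source.
const source = Observable.timer(100).map(() => Math.random());

// Roughly what .cache(1) used to provide: one shared subscription to the
// source, with the latest value replayed to any late subscriber while the
// refCount stays above zero.
const cached = source.publishReplay(1).refCount();

cached.subscribe(v => console.log('first :', v));  // connects to the source
cached.subscribe(v => console.log('second:', v));  // shares that connection

// Caveat from above: publishReplay multicasts through a single ReplaySubject
// instance, so once the source errors or completes, repeat()/retry() will not
// re-run the source.
```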

@tklever
Contributor Author

tklever commented Oct 13, 2016

@trxcllnt Thank you so much for that help. I really appreciate it. If you ever find yourself in AZ, I owe you many beers.

@benlesh
Member

benlesh commented Jan 23, 2018

Closing as stale. Thank you so much for your contribution @tklever, I'm sorry that it didn't work out this time, but it did give us a lot to talk about. I hope to see you contributing again sometime soon.

benlesh closed this Jan 23, 2018
@lock

lock bot commented Jun 6, 2018

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked as resolved and limited conversation to collaborators Jun 6, 2018